404 research outputs found

    Rational design of a polyoxometalate intercalated layered double hydroxide: highly efficient catalytic epoxidation of allylic alcohols under mild and solvent-free conditions

    Get PDF
    Intercalation catalysts, owing to their modular and accessible galleries and unique interlamellar chemical environment, have found wide application in various catalytic reactions. However, poor mass transfer between the active components of intercalated catalysts and organic substrates is one of the challenges that limit their further application. Herein, we have developed a novel heterogeneous catalyst by intercalating the polyoxometalate (POM) Na9LaW10O36·32H2O (LaW10) into layered double hydroxides (LDHs) that have been covalently modified with ionic liquids (ILs). The intercalation catalyst demonstrates high activity and selectivity for the epoxidation of various allylic alcohols in the presence of H2O2. For example, trans-2-hexen-1-ol undergoes up to 96% conversion with 99% epoxide selectivity at 25 °C in 2.5 h. To the best of our knowledge, the Mg3Al−ILs−C8−LaW10 composite constitutes one of the most efficient heterogeneous catalysts reported so far for the epoxidation of allylic alcohols, including hydrophobic allylic alcohols with long alkyl chains.

    Massively Parallel Algorithms for the Stochastic Block Model

    Full text link
    Learning the community structure of a large-scale graph is a fundamental problem in machine learning, computer science and statistics. We study the problem of exactly recovering the communities in a graph generated from the Stochastic Block Model (SBM) in the Massively Parallel Computation (MPC) model. Specifically, given $kn$ vertices that are partitioned into $k$ equal-sized clusters (i.e., each has size $n$), a graph on these $kn$ vertices is randomly generated such that each pair of vertices is connected with probability $p$ if they are in the same cluster and with probability $q$ if not, where $p > q > 0$. We give MPC algorithms for the SBM in the (very general) $s$-space MPC model, where each machine has memory $s = \Omega(\log n)$. Under the condition that $\frac{p-q}{\sqrt{p}} \geq \tilde{\Omega}(k^{1/2} n^{-1/2 + 1/(2(r-1))})$ for any integer $r \in [3, O(\log n)]$, our first algorithm exactly recovers all the $k$ clusters in $O(kr \log_s n)$ rounds using $\tilde{O}(m)$ total space, or in $O(r \log_s n)$ rounds using $\tilde{O}(km)$ total space. If $\frac{p-q}{\sqrt{p}} \geq \tilde{\Omega}(k^{3/4} n^{-1/4})$, our second algorithm achieves $O(\log_s n)$ rounds and $\tilde{O}(m)$ total space complexity. Both algorithms significantly improve upon a recent result of Cohen-Addad et al. [PODC'22], who gave algorithms that only work in the sublinear-space MPC model, where each machine has local memory $s = O(n^{\delta})$ for some constant $\delta > 0$, with a much stronger condition on $p, q, k$. Our algorithms are based on collecting the $r$-step neighborhood of each vertex and comparing the difference of some statistical information generated from the local neighborhoods for each pair of vertices. To implement the clustering algorithms in parallel, we present efficient approaches for implementing some basic graph operations in the $s$-space MPC model.
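
    As a concrete single-machine illustration of the neighborhood-comparison idea sketched in this abstract, the Python snippet below samples an SBM graph, collects each vertex's r-step neighborhood, and greedily groups vertices whose neighborhoods overlap heavily. It is only a sketch of the underlying intuition, not the paper's MPC algorithm: there is no parallelism or space accounting, the Jaccard test and its threshold are ad hoc stand-ins for the paper's statistics, and all parameter values are illustrative.

        # Single-machine sketch of the neighborhood-comparison idea; NOT the MPC algorithm.
        import networkx as nx


        def sample_sbm(k, n, p, q, seed=0):
            """Sample a graph with k planted clusters of size n each."""
            sizes = [n] * k
            probs = [[p if i == j else q for j in range(k)] for i in range(k)]
            return nx.stochastic_block_model(sizes, probs, seed=seed)


        def r_step_neighborhood(G, v, r):
            """Vertices within distance r of v (including v itself)."""
            return set(nx.single_source_shortest_path_length(G, v, cutoff=r))


        def recover_clusters(G, r=1, threshold=0.2):
            """Greedily group vertices whose r-step neighborhoods overlap heavily."""
            neigh = {v: r_step_neighborhood(G, v, r) for v in G}
            clusters = []
            for v in G:
                for cluster in clusters:
                    rep = cluster[0]                      # compare against one representative
                    inter = len(neigh[v] & neigh[rep])
                    union = len(neigh[v] | neigh[rep])
                    if union and inter / union >= threshold:
                        cluster.append(v)
                        break
                else:                                     # no sufficiently similar cluster found
                    clusters.append([v])
            return clusters


        if __name__ == "__main__":
            G = sample_sbm(k=3, n=100, p=0.7, q=0.05)     # illustrative parameters
            print([len(c) for c in recover_clusters(G)])  # roughly [100, 100, 100]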

    Massively Parallel Algorithms for the Stochastic Block Model

    Get PDF
    Learning the community structure of a large-scale graph is a fundamental problem in machine learning, computer science and statistics. Among others, the Stochastic Block Model (SBM) serves as a canonical model for community detection and clustering, and the Massively Parallel Computation (MPC) model is a mathematical abstraction of real-world parallel computing systems, which provides a powerful computational framework for handling large-scale datasets. We study the problem of exactly recovering the communities in a graph generated from the SBM in the MPC model. Specifically, given kn vertices that are partitioned into k equal-sized clusters (i.e., each has size n), a graph on these kn vertices is randomly generated such that each pair of vertices is connected with probability p if they are in the same cluster and with probability q if not, where p > q > 0. We give MPC algorithms for the SBM in the (very general) s-space MPC model, where each machine is guaranteed to have memory s = Ω(log n). Under the condition that (p-q)/√p ≥ Ω̃(k^{1/2} n^{-1/2+1/(2(r-1))}) for any integer r ∈ [3, O(log n)], our first algorithm exactly recovers all the k clusters in O(kr log_s n) rounds using Õ(m) total space, or in O(r log_s n) rounds using Õ(km) total space. If (p-q)/√p ≥ Ω̃(k^{3/4} n^{-1/4}), our second algorithm achieves O(log_s n) rounds and Õ(m) total space complexity. Both algorithms significantly improve upon a recent result of Cohen-Addad et al. [PODC'22], who gave algorithms that only work in the sublinear space MPC model, where each machine has local memory s = O(n^δ) for some constant δ > 0, with a much stronger condition on p, q, k. Our algorithms are based on collecting the r-step neighborhood of each vertex and comparing the difference of some statistical information generated from the local neighborhoods for each pair of vertices. To implement the clustering algorithms in parallel, we present efficient approaches for implementing some basic graph operations in the s-space MPC model.

    Parameterization of modeling subsurface hydrocarbon contamination and biosurfactant enhanced remediation processes

    Get PDF
    Subsurface hydrocarbon contamination caused by accidental spills or operational leakages of petroleum products is a global environmental concern. To remediate contaminated sites cost-effectively and in an environmentally friendly manner, biosurfactant enhanced aquifer remediation (BSEAR) technologies have become a popular subject in both research and practice. However, the inherent uncertainties and complexities of subsurface systems make it challenging to numerically simulate hydrocarbon transport and fate as well as the remediation processes, so more efficient and robust parameterization approaches for such modeling are highly desirable. This research aims to help fill that gap by developing a novel hybrid stochastic–design of experiment aided parameterization (HSDP) method for modeling BSEAR processes. The method was developed and tested on an integrated physical and numerical modeling system comprising a set of intermediate-scale flow cells (ISFCs) and a numerical simulator named BioF&T 3D. In general, the HSDP method proceeds by: 1) building design of experiment (DOE) models based on screened parameters and defined responses that reflect the goodness of fit between observed and simulated data; 2) identifying the interactions among parameters and their significance; 3) optimizing the DOE-predicted responses; 4) introducing stochastic data within reduced intervals around the optimized parameters; and 5) running Monte Carlo simulations to find the optimal responses and the corresponding combinations of parameters. The flow cell tests showed that the HSDP method can improve both the efficiency and robustness of model parameterization and significantly reduce the computational demand without compromising its effectiveness in quantifying parameter interactions and uncertainties. Furthermore, a lab-synthesized surfactin was applied in this study. The effect of dissolution enhancement was observed in parallel flow cell experiments, especially during the first 12 hours following the initial hydrocarbon release. The HSDP method was demonstrated to be capable of advancing BioF&T 3D, which lacks the capacity to simulate surfactants. By incorporating the HSDP method, the BSEAR processes were effectively simulated with a satisfactory overall goodness of fit (R² = 0.76, 0.81, 0.83, and 0.81 for benzene, toluene, ethylbenzene, and xylene, respectively). The enhanced dissolution effect was also reflected in the model parameterization through an increased first-12-hour hydrocarbon loading ratio (12LR) compared with non-biosurfactant processes. This research developed a new parameterization method, HSDP, which is capable of revealing interactions among parameters and quantifying their uncertainties in a robust and efficient manner. Using this method, the study also initiated attempts to advance simpler numerical models for simulating complicated BSEAR processes, which is particularly attractive for potential applications in practice.
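
    As a rough illustration of steps 1) through 5), the sketch below runs a full factorial design over two invented parameters, fits a quadratic response surface whose cross term exposes the parameter interaction, and then Monte Carlo samples a reduced interval around the surface optimum. The toy simulate() function merely stands in for a simulator such as BioF&T 3D, and the parameter names, ranges, and "observed" data are hypothetical.

        # Hypothetical sketch of the DOE + Monte Carlo loop; simulate() is NOT BioF&T 3D.
        import itertools

        import numpy as np

        rng = np.random.default_rng(42)
        observed = np.array([0.9, 1.8, 2.6, 3.1, 3.3])        # placeholder observations


        def simulate(k_sorption, loading_ratio):
            """Toy stand-in for the transport-and-fate simulator."""
            t = np.arange(1, 6)
            return loading_ratio * np.log1p(t) / (1.0 + k_sorption)


        def r_squared(sim):
            """Goodness of fit between observed and simulated data."""
            ss_res = np.sum((observed - sim) ** 2)
            ss_tot = np.sum((observed - observed.mean()) ** 2)
            return 1.0 - ss_res / ss_tot


        # Steps 1-2: full factorial design and a quadratic response surface; the
        # k_sorption*loading_ratio column makes the parameter interaction explicit.
        levels = {"k_sorption": [0.1, 0.5, 1.0], "loading_ratio": [1.0, 2.0, 3.0]}
        design = list(itertools.product(*levels.values()))
        y = np.array([r_squared(simulate(k, lr)) for k, lr in design])
        X = np.array([[1.0, k, lr, k * lr, k**2, lr**2] for k, lr in design])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        print("interaction coefficient:", coef[3])

        # Step 3: best design point according to the fitted surface.
        best_k, best_lr = design[int(np.argmax(X @ coef))]

        # Steps 4-5: Monte Carlo sampling within reduced intervals around the optimum,
        # keeping the parameter combination with the highest goodness of fit.
        samples = rng.uniform([0.8 * best_k, 0.8 * best_lr],
                              [1.2 * best_k, 1.2 * best_lr], size=(2000, 2))
        scores = np.array([r_squared(simulate(k, lr)) for k, lr in samples])
        print("best parameters:", samples[np.argmax(scores)], "R^2:", scores.max())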

    Controlling atom-photon bound states in a coupled resonator array with a two-level quantum emitter

    Full text link
    We consider a one-dimensional (1D) coupled-resonator array (CRA), where a two-level quantum emitter (2LE) is electric-dipole coupled to the modes of two adjacent resonators. We investigate the energy spectrum, the photon probability distribution of the bound states, and the emission process of the 2LE into the CRA vacuum. A quantum phase transition is found, characterized by a change in the number of out-of-band discrete levels, and the condition for this change is also presented. The photon wave functions of the bound states are found to be asymmetric about the position of the 2LE when the coupling strengths between the 2LE and the two resonators are unequal, and they share the same preferred direction, which is primarily determined by the larger of the two coupling strengths. The presence of the atom-photon bound states manifests itself as a stationary oscillation or a non-vanishing constant at sufficiently long times. Comment: 5 pages, 6 figures
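
    For context, a schematic Hamiltonian consistent with this setup — a tight-binding CRA with the 2LE dipole-coupled, under the rotating-wave approximation, to resonators j = 0 and j = 1 with possibly unequal strengths — might be written as below; the symbols (ω_a, ω_c, J, g_0, g_1) are illustrative and are not taken from the paper.

        % Schematic Hamiltonian (\hbar = 1); all symbols are illustrative, not the paper's.
        \[
          H = \omega_a\, \sigma^{+}\sigma^{-}
              + \omega_c \sum_{j} a_j^{\dagger} a_j
              - J \sum_{j} \bigl( a_j^{\dagger} a_{j+1} + \mathrm{h.c.} \bigr)
              + \sum_{j=0,1} g_j \bigl( a_j^{\dagger}\, \sigma^{-} + a_j\, \sigma^{+} \bigr)
        \]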

    A hazard analysis via an improved timed colored Petri net with time–space coupling safety constraint

    Get PDF
    Petri nets are graphical and mathematical tools that are applicable to many systems for modeling, simulation, and analysis. With the emergence of the concept of partitioning in time and space domains proposed in the avionics application standard software interface (ARINC 653), it has become difficult to analyze time–space coupling hazards resulting from resource partitioning using classical or advanced Petri nets. In this paper, we propose a time–space coupling safety constraint and an improved timed colored Petri net with imposed time–space coupling safety constraints (TCCP-NET) to fill this gap. Time–space coupling hazard analysis is conducted in three steps: specification modeling, simulation execution, and results analysis. A TCCP-NET is employed to model and analyze integrated modular avionics (IMA), a real-time, safety-critical system. The analysis results are used to verify whether time–space coupling hazards exist at runtime. The method we propose is better suited to modeling safety-critical real-time systems, as it can specify resource allocations in both the time and space domains. TCCP-NETs can effectively detect underlying time–space coupling hazards.
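
    To make the notion of a time–space coupling hazard concrete, the short sketch below (hypothetical; it is not the TCCP-NET formalism) models ARINC 653-style partitions as scheduling windows plus assigned memory regions, and flags any pair whose time windows and memory regions both overlap.

        # Hypothetical illustration of time-space coupling, not the TCCP-NET model: a
        # hazard is flagged when two partitions share a memory region while their
        # scheduling windows overlap.
        from dataclasses import dataclass
        from itertools import combinations


        @dataclass
        class Partition:
            name: str
            window: tuple          # (start_ms, end_ms) scheduling window in the major frame
            memory: tuple          # (base_addr, limit_addr) assigned memory region


        def overlaps(a, b):
            """True if the half-open intervals [a0, a1) and [b0, b1) intersect."""
            return a[0] < b[1] and b[0] < a[1]


        def coupling_hazards(partitions):
            """Pairs of partitions whose time windows and memory regions both overlap."""
            return [(p.name, q.name)
                    for p, q in combinations(partitions, 2)
                    if overlaps(p.window, q.window) and overlaps(p.memory, q.memory)]


        if __name__ == "__main__":
            parts = [
                Partition("flight_ctrl", window=(0, 20),  memory=(0x1000, 0x2000)),
                Partition("display",     window=(15, 35), memory=(0x1800, 0x2800)),
                Partition("maintenance", window=(40, 60), memory=(0x1000, 0x2000)),
            ]
            print(coupling_hazards(parts))   # [('flight_ctrl', 'display')]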